Stochastic graph recurrent neural network


Abstract

Representation learning over dynamic graphs has attracted much attention because of its wide applications. Recently, sequential probabilistic generative models have achieved impressive results because they can model data distributions. However, modeling the distribution of dynamic graphs is still extremely challenging. Existing methods usually ignore the mutual interference between stochastic states and deterministic states. Besides, the assumption that latent variables follow Gaussian distributions is often inappropriate. To address these problems, we propose the stochastic graph recurrent neural network (SGRNN), a model for representation learning over dynamic graphs. It separates deterministic states and stochastic states in the iterative process. To improve the flexibility of the latent variables, we set the prior and the posterior as semi-implicit distributions, yielding DSI-SGRNN. In addition, to alleviate the KL-vanishing problem in SGRNN, a simple and interpretable structure is proposed based on the lower bound of the KL-divergence. The structure introduces only a few extra parameters and can be implemented with a few lines of code modification. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed model.
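The abstract only names the ingredients, so the following is a minimal sketch of the generic setup it builds on: one step of a VRNN-style recurrent latent-variable model with separate modules for the deterministic and stochastic states, and a diagonal-Gaussian prior and posterior. This is not the paper's actual architecture; every module name and dimension is an illustrative assumption, and the semi-implicit distributions of DSI-SGRNN are only noted in comments.

```python
# A minimal sketch (illustrative, NOT the paper's architecture): one step of
# a VRNN-style recurrent latent-variable model with a deterministic state
# h_t and a stochastic state z_t handled by separate modules.
import torch
import torch.nn as nn

class StochasticRecurrentStep(nn.Module):
    def __init__(self, x_dim: int, h_dim: int, z_dim: int):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)              # p(z_t | h_{t-1})
        self.posterior = nn.Linear(h_dim + x_dim, 2 * z_dim)  # q(z_t | h_{t-1}, x_t)
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)           # deterministic update

    def forward(self, x_t, h_prev):
        # Diagonal-Gaussian prior and posterior; the paper argues this
        # Gaussian assumption is too restrictive and replaces both with
        # semi-implicit distributions (DSI-SGRNN).
        p_mu, p_logvar = self.prior(h_prev).chunk(2, dim=-1)
        q_mu, q_logvar = self.posterior(
            torch.cat([h_prev, x_t], dim=-1)).chunk(2, dim=-1)

        # Reparameterized sample of the stochastic state z_t.
        z_t = q_mu + torch.randn_like(q_mu) * (0.5 * q_logvar).exp()

        # The deterministic state is updated by its own module (one reading
        # of "separates ... states in the iterative process").
        h_t = self.rnn(torch.cat([x_t, z_t], dim=-1), h_prev)

        # Analytic KL(q || p) between diagonal Gaussians: the term whose
        # collapse causes the KL-vanishing problem.
        kl = 0.5 * (p_logvar - q_logvar
                    + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                    - 1.0).sum(dim=-1)
        return z_t, h_t, kl
```

Training would maximize an ELBO (per-step reconstruction minus this KL); per the abstract, the proposed anti-vanishing structure is derived from a lower bound on the KL term and adds only a few parameters.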


Similar Articles

Stochastic Recurrent Neural Network for Speech Recognition

This paper presents a new stochastic learning approach to construct a latent variable model for recurrent neural network (RNN) based speech recognition. A hybrid generative and discriminative stochastic network is implemented to build a deep classification model. In the implementation, we conduct stochastic modeling for the hidden states of a recurrent neural network based on the variational auto-enc...
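Since the snippet cuts off at "variational auto-enc...", here is just the core mechanism a variational auto-encoder contributes to such a model, the reparameterization trick, sketched under the assumption that the stochastic hidden state is a diagonal Gaussian (names are illustrative):

```python
# Reparameterized sampling of a stochastic hidden state: z = mu + sigma * eps
# keeps the sample differentiable with respect to mu and logvar, which is
# what lets a variational auto-encoder train by backpropagation.
import torch

def sample_hidden(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    eps = torch.randn_like(mu)      # noise drawn outside the gradient path
    return mu + (0.5 * logvar).exp() * eps
```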


Recurrent neural network training with preconditioned stochastic gradient descent

Recurrent neural networks (RNN), especially the ones requiring extremely long-term memories, are difficult to train. Hence, they provide an ideal testbed for benchmarking the performance of optimization algorithms. This paper reports test results of a recently proposed preconditioned stochastic gradient descent (PSGD) algorithm on RNN training. We find that PSGD may outperform Hessian-free o...
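As a rough illustration of what "preconditioned" means here, the sketch below is an RMSProp-style diagonal preconditioner, not the specific PSGD algorithm the paper tests (which fits a more general preconditioner online); all names are illustrative.

```python
# One diagonally preconditioned SGD step: a running estimate of squared
# gradients rescales each coordinate of the gradient before the update.
import torch

def preconditioned_step(param, grad, sq_avg, lr=1e-2, beta=0.99, eps=1e-8):
    sq_avg.mul_(beta).addcmul_(grad, grad, value=1 - beta)  # E[g^2] estimate
    with torch.no_grad():
        param -= lr * grad / (sq_avg.sqrt() + eps)          # preconditioned update
    return sq_avg
```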


A Recurrent Neural Network Model for Solving Linear Semidefinite Programming

In this paper we solve a wide range of semidefinite programming (SDP) problems by using recurrent neural networks (RNNs). SDP is an important numerical tool for analysis and synthesis in systems and control theory. First we reformulate the problem as a linear programming problem; second, we reformulate it as a first-order system of ordinary differential equations. Then a recurrent neural network...
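To make the "first-order ODE system solved by a recurrent iteration" idea concrete, here is a toy neurodynamic example under simplified assumptions (not the paper's formulation for SDP): a gradient-flow ODE whose equilibrium solves a linear system, integrated by Euler steps, with the iteration playing the role of the recurrent network.

```python
# Gradient flow dx/dt = -A^T (A x - b) converges to a solution of A x = b;
# each Euler step is one "recurrent" update of the network state x.
import numpy as np

def gradient_flow_solve(A, b, step=1e-2, iters=5000):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * A.T @ (A @ x - b)   # explicit Euler step of the ODE
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gradient_flow_solve(A, b))        # approaches [0.8, 1.4]
```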


Recurrent Neural Network Regularization

We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include la...
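The key prescription of that paper is easy to state: apply dropout only to the non-recurrent (layer-to-layer) connections of a stacked LSTM and leave the recurrent state untouched. PyTorch's built-in `dropout` argument on `nn.LSTM` implements this between-layer scheme, so a minimal version (sizes are illustrative) is:

```python
# Dropout on non-recurrent connections of a stacked LSTM only: nn.LSTM's
# `dropout` argument drops the outputs passed between layers, never the
# recurrent state carried across time steps.
import torch.nn as nn

class DropoutStackedLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers=2, p=0.5):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size,
                            num_layers=num_layers, dropout=p)

    def forward(self, x, state=None):
        return self.lstm(x, state)
```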


Recurrent Neural Network Grammars

We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art se...
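For readers unfamiliar with the model, the generative story (as described in the RNN grammar literature; the truncated snippet does not spell it out) builds a phrase-structure tree through a sequence of transitions, roughly:

```python
# A toy action sequence generating "(S (NP the cat) (VP sleeps))".
# NT(X) opens a nonterminal, GEN(w) emits a word, REDUCE closes the most
# recently opened nonterminal; an RNN scores each next action given the
# stack, generated words, and action history. Labels here are illustrative.
actions = [
    "NT(S)",
    "NT(NP)", "GEN(the)", "GEN(cat)", "REDUCE",
    "NT(VP)", "GEN(sleeps)", "REDUCE",
    "REDUCE",
]
```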



Journal

Journal title: Neurocomputing

Year: 2022

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2022.05.105